3 research outputs found

    Look but don't touch: Visual cues to surface structure drive somatosensory cortex.

    When planning interactions with nearby objects, our brain uses visual information to estimate their shape, material composition, and surface structure before we come into contact with them. Here we analyse brain activations elicited by different types of visual appearance, measuring fMRI responses to objects that are glossy, matte, rough, or textured. In addition to activation in visual areas, we found that fMRI responses are evoked in the secondary somatosensory area (S2) when looking at glossy and rough surfaces. This activity could be reliably discriminated on the basis of tactile-related visual properties (glossy, rough, and matte), but importantly, other visual properties (i.e., coloured texture) did not substantially change fMRI activity. The activity could not be attributed solely to tactile imagination, as explicitly asking participants to imagine such surface properties did not produce the same results. These findings suggest that visual cues to an object's surface properties evoke activity in neural circuits associated with tactile stimulation. This activation may reflect the a priori probability of the physics of the interaction (i.e., the expectation of upcoming friction) that can be used to plan finger placement and grasp force.

    This project was supported by the Wellcome Trust (095183/Z/10/Z). This is the final version of the article. It first appeared from Elsevier via http://dx.doi.org/10.1016/j.neuroimage.2015.12.05

    Mechanisms for extracting a signal from noise as revealed through the specificity and generality of task training.

    Visual judgments critically depend on (1) the detection of meaningful items against cluttered backgrounds and (2) the discrimination of an item from highly similar alternatives. Learning and experience are known to facilitate these processes, but the specificity with which these processes operate is poorly understood. Here we use psychophysical measures in human participants to test learning in two types of commonly used tasks that target segmentation (signal-in-noise, or "coarse" tasks) versus the discrimination of highly similar items (feature difference, or "fine" tasks). First, we consider the processing of binocular disparity signals, examining performance on signal-in-noise and feature difference tasks after a period of training on one of these tasks. Second, we consider the generality of learning between different visual features, testing performance on both task types for displays defined by disparity, motion, or orientation. We show that training on a feature difference task also improves performance on signal-in-noise tasks, but only for the same visual feature. By contrast, training on a signal-in-noise task has limited benefits for fine judgments of the same feature but supports learning that generalizes to signal-in-noise tasks for other features. These findings indicate that commonly used signal-in-noise tasks require at least three distinct components: feature representations, signal-specific selection, and a generalized process that enhances segmentation. As such, there is clear potential to harness areas of commonality (both within and between cues) to improve impaired perceptual functions.

    Training transfers the limits on perception from parietal to ventral cortex.

    Visually guided behavior depends on (1) extracting and (2) discriminating signals from complex retinal inputs, and these perceptual skills improve with practice. For instance, training on aerial reconnaissance facilitated World War II Allied military operations: analysts pored over stereoscopic photographs, becoming expert at (1) segmenting pictures into meaningful items to break camouflage against (noisy) backgrounds and (2) discriminating fine details to distinguish V-weapons from innocuous pylons. Training is understood to optimize neural circuits that process scene features (e.g., orientation) for particular purposes (e.g., judging position). Yet learning is most beneficial when it generalizes to other settings and is critical in recovery after adversity, challenging our understanding of the circuitry involved. Here we used repetitive transcranial magnetic stimulation (rTMS) to infer the functional organization supporting learning generalization in the human brain. First, we show dissociable contributions of the posterior parietal cortex (PPC) versus lateral occipital (LO) circuits: extracting targets from noise is disrupted by PPC stimulation, whereas judging feature differences is affected by LO rTMS. Then, we demonstrate that training causes striking changes in this circuit: after feature training, identifying a target in noise is no longer disrupted by PPC stimulation but instead by LO stimulation. This indicates that training shifts the limits on perception from parietal to ventral brain regions and identifies a critical neural circuit for visual learning. We suggest that generalization is implemented by supplanting dynamic processing conducted in the PPC with specific feature templates stored in the ventral cortex.

    This is the final version. It was first published by Elsevier in Current Biology at http://www.sciencedirect.com/science/article/pii/S0960982214010719